What Anthropic's Clash With the Pentagon Is Really About

The Atlantic - Technology

Who will take responsibility for the technology? The weeks-long conflict between Anthropic and the Department of Defense is entering a new phase. After being designated a supply-chain risk by DOD last week, a move that effectively forbids Pentagon contractors from using its products, the AI company filed a lawsuit against DOD this morning alleging that the government's actions were unconstitutional and ideologically motivated. Then, this afternoon, 37 employees from OpenAI and Google DeepMind, including Google's chief scientist, Jeff Dean, signed an amicus brief in support of Anthropic, in essence lending support to one of their employers' greatest business rivals (even as OpenAI itself has established a controversial new contract with DOD). For the past few weeks, Anthropic has been in heated negotiations with the Pentagon over how the U.S. military can use the firm's AI systems.


OpenAI Is Opening the Door to Government Spying

The Atlantic - Technology

Outside OpenAI's headquarters, a handful of people gathered on Monday holding pieces of colorful chalk. They got down on their knees and started writing messages on the sidewalk. Please no legal mass surveillance. At issue was a business deal that the company recently signed with the Department of Defense, following the Pentagon's sudden turn against Anthropic. OpenAI will now supply its technology to the military for use in classified settings, the sorts that may involve wartime decisions and intelligence-gathering, an agreement, many legal experts told me, that could give the government wide-ranging powers.


Syria's leader says his country has transformed from 'an exporter of crisis.'

NYT > Middle East

On Wednesday, officials and diplomats sounded the alarm on A.I.'s ability to undermine the integrity of information and fabricate fake audio and video recordings. They also warned that it posed a threat to cybersecurity and would enable the rise of autonomous weapons. Still, some argued that, if used responsibly and with guardrails, A.I. could also help foster peace and stability. Secretary-General António Guterres, who for the past year has championed efforts to regulate A.I., said that the Council had a responsibility to ensure that military uses of artificial intelligence comply with international law and the U.N. Charter. "From design to deployment to decommissioning, A.I. systems must always comply with international law; military uses must be clearly regulated," Mr. Guterres said, before ending his speech with a warning and a call to action.


Fox News AI Newsletter: Holy See calls for end to autonomous weapons

FOX News

Fox News chief political anchor Bret Baier has the latest on the pros and cons of the bombshell developments on 'Special Report.' The Vatican flag flies outside the United Nations headquarters on Sept. 25, 2015, in New York City. 'PROPER HUMAN CONTROL': A delegation representing the Holy See urged the United Nations this week to put a moratorium on autonomous weapons designed to kill without human decision-making. 'INSANE': Canva is facing pushback from customers over plans to increase subscription prices by more than 300% in some instances. United Nations Headquarters in New York City is seen flanked by Hamas and Hezbollah fighters.


AI's 'Oppenheimer moment': autonomous weapons enter the battlefield

The Guardian

A squad of soldiers is under attack and pinned down by rockets in the close quarters of urban combat. One of them makes a call over his radio, and within moments a fleet of small autonomous drones equipped with explosives flies through the town square, entering buildings and scanning for enemies before detonating on command. One by one the suicide drones seek out and kill their targets. A voiceover on the video, a fictional ad for the multibillion-dollar Israeli weapons company Elbit Systems, touts the AI-enabled drones' ability to "maximize lethality and combat tempo". While defense companies like Elbit promote their new advancements in artificial intelligence (AI) with sleek dramatizations, the technology they are developing is increasingly entering the real world.


The Promise and Peril of AI

TIME - Tech

In early 2023, following an international conference that included dialogue with China, the United States released a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy," urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of "human control" itself is hazier than it might seem. If humans authorized a future AI system to "stop an incoming nuclear attack," how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes. We need to recognize the fact that AI technologies are inherently dual-use.


ChatGPT maker quietly changes rules to allow the US military to incorporate its technology

Daily Mail - Science & tech

OpenAI, the maker of ChatGPT, has quietly changed its rules and removed a ban on using the chatbot and its other AI tools for military purposes, and revealed that it is already working with the Department of Defense. Experts have previously voiced fears that AI could escalate conflicts around the world through 'slaughterbots' that can kill without any human intervention. The rule change, which occurred after Wednesday of last week, removed a sentence stating that the company would not permit use of its models for 'activity that has high risk of physical harm, including: weapons development, military and warfare.' A company spokesman said: 'Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. 'There are, however, national security use cases that align with our mission.


Pentagon moving to ensure human control so AI doesn't 'make the decision for us'

FOX News

Naftali Bennett spoke exclusively with Fox News Digital about the benefits of AI and the need to set parameters for its use now. The U.S. military is embracing artificial intelligence as a tool for quickly digesting data and helping leaders make the right decision, not for making those decisions for the humans in charge, according to two top AI advisors in U.S. Central Command. CENTCOM, which is tasked with safeguarding U.S. national security in the Middle East and Southeast Asia, just hired Dr. Andrew Moore as its first AI advisor. Moore is the former director of Google Cloud AI and former dean of the Carnegie Mellon University School of Computer Science, and he'll be working with Schuyler Moore, CENTCOM's chief technology officer. In an interview with Fox News Digital, they both agreed that while some are imagining AI-driven weapons, the U.S. military aims to keep humans in the decision-making seat, using AI to assess the massive amounts of data that inform the people sitting in those seats.


'Eyes and ears': Could drones prove decisive in the Ukraine war?

Al Jazeera

Warning: Some readers may find some of the scenes described in this article disturbing. Kyiv, Ukraine – Ivan Ukraintsev, a stern-faced insurance broker turned director of a wartime charity providing crucial aid to Ukraine's military forces, is on a mission: to help Ukraine win the drone war. He is a polite but no-nonsense character, and he is here to talk about drones. "If we [Ukraine] had enough drones, we could end this war in two months," he says firmly. Ivan, who heads the charity Starlife, has recently returned from overseeing a drone delivery to Bakhmut, a city in eastern Ukraine that has become the focal point for months of bloody battles between Ukrainian and Russian forces. Trench warfare, pockmarked and corpse-ridden swathes of no man's land, and constant artillery bombardments have drawn comparisons to battlefield conditions during World War I.


HELPFUL OR HOMICIDAL -- HOW DANGEROUS IS ARTIFICIAL INTELLIGENCE (AI)? - Dying Words

#artificialintelligence

AI is great for what I do--create content for the entertainment industry--and I have no plans to use AI for world domination. Not like a character I'm basing on a real person for my new series titled City Of Danger. It's a work in progress set for release this fall, 2022. I didn't invent the character. I didn't have to, because he exists in real life, and he's a mover and shaker behind many world economic and technological advances, including promoting artificial intelligence. His name is Klaus Schwab.